single precision support in skimage.restoration module #5219
Conversation
Hello @grlee77! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:
Comment last updated at 2021-05-21 14:32:11 UTC
Force-pushed from aeeca4e to 6b746f6
Nice work 👍
I've added a bunch of comments, but most of them are repetitions of what I commented earlier, just noted in every place I found that may need a change.
In general, I wonder if functions should return the same dtype as the input image, or if they should return the dtype in which the processing was done. E.g., if the input is float16, should they return float32 or float16?
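As a minimal sketch of the two policies being discussed (all names here are hypothetical illustrations, not the actual skimage API):

```python
import numpy as np

# Hypothetical filter illustrating the two return-dtype policies:
# Policy A: return the dtype used for processing (float16 -> float32).
# Policy B: cast back to the input dtype ("dtype in, dtype out").
def process(image, preserve_input_dtype=False):
    # float16 is promoted to float32 for computation (assumed promotion rule)
    float_dtype = np.float32 if image.dtype == np.float16 else np.float64
    result = image.astype(float_dtype) * 0.5  # stand-in for the real filter
    if preserve_input_dtype:
        return result.astype(image.dtype, copy=False)  # Policy B
    return result                                      # Policy A

img = np.ones((4, 4), dtype=np.float16)
print(process(img).dtype)                             # float32
print(process(img, preserve_input_dtype=True).dtype)  # float16
```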
img = 155 * np.ones((100, 100), dtype=dtype)
kernel = ellipsoid_kernel((25, 53), 50).astype(dtype, copy=False)
background = rolling_ball(img, kernel=kernel)
assert background.dtype == img.dtype
This will always be true, because we cast background to img.dtype after applying the kernel. The question is whether we want that, or whether it should be background.dtype == np.float32 if img is float16.
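A minimal sketch of this point, using a hypothetical stand-in for rolling_ball (not the real implementation):

```python
import numpy as np

# Hypothetical stand-in illustrating why the assertion in the test above is
# vacuous: casting back to img.dtype makes it pass for any input dtype.
def rolling_ball_like(img):
    background = img.astype(np.float32, copy=False) + 0.0  # float processing
    return background.astype(img.dtype, copy=False)        # cast back to input dtype

img = np.ones((8, 8), dtype=np.float16)
out = rolling_ball_like(img)
assert out.dtype == img.dtype   # always true by construction
# a dtype-promotion check would instead have to assert the promoted type:
# assert out.dtype == np.float32  # would fail with the cast-back behavior
```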
right, thanks!
I have left the dtype behavior in rolling_ball as-is for now. I think this was mainly a convenience so that users providing integer-valued inputs get integer-valued outputs?
Yes. We introduced this because of integer images (I never wrote an optimized integer kernel), but it affects all inputs equally. The general idea was "dtype in, dtype out".
I think it will be best if the behavior here regarding the return type is consistent with the other functions in the library. If that means changing the current behavior, I think it makes sense to do.
I think this concern is orthogonal to the purpose of this PR, so let's just defer this decision so as not to hold the PR up unnecessarily.
Here is a proposed implementation: 1.) optionally raise a ValueError if the input is complex-valued, and 2.) I added float64 and float128 to the mapping:

from collections.abc import Iterable

import numpy as np

new_float_type = {
    # preserved types
    np.float32().dtype.char: np.float32,
    np.float64().dtype.char: np.float64,
    np.complex64().dtype.char: np.complex64,
    np.complex128().dtype.char: np.complex128,
    # altered types
    np.float16().dtype.char: np.float32,
    'g': np.float64,     # np.float128 ; doesn't exist on windows
    'G': np.complex128,  # np.complex256 ; doesn't exist on windows
}

def _supported_float_type(dtype, allow_complex=False):
    """Return an appropriate floating-point dtype for a given dtype.

    float32, float64, complex64, complex128 are preserved.
    float16 is promoted to float32.
    complex256 is demoted to complex128.
    Other types are cast to float64.

    Parameters
    ----------
    dtype : np.dtype or Iterable of np.dtype
        The input dtype. If a sequence of multiple dtypes is provided, each
        dtype is first converted to a supported floating point type and the
        final dtype is then determined by applying `np.result_type` on the
        sequence of supported floating point types.
    allow_complex : bool, optional
        If False, raise a ValueError on complex-valued inputs.

    Returns
    -------
    float_type : dtype
        Floating-point dtype for the image.
    """
    if isinstance(dtype, Iterable) and not isinstance(dtype, str):
        return np.result_type(*(_supported_float_type(d) for d in dtype))
    dtype = np.dtype(dtype)
    if not allow_complex and dtype.kind == 'c':
        raise ValueError("complex valued input is not supported")
    return new_float_type.get(dtype.char, np.float64)
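To make the promotion rules concrete, here is a hedged, self-contained sketch using a reduced copy of the mapping above (the extended-precision entries are omitted for portability, and the names are prefixed with `demo_` to mark them as illustrative, not the real helper):

```python
import numpy as np

# Reduced copy of the lookup table from the proposed helper above
# (float128/complex256 entries omitted so this runs on any platform).
demo_float_type = {
    np.float32().dtype.char: np.float32,
    np.float64().dtype.char: np.float64,
    np.complex64().dtype.char: np.complex64,
    np.complex128().dtype.char: np.complex128,
    np.float16().dtype.char: np.float32,
}

def demo_supported_float_type(dtype):
    # anything not in the table (uint8, int64, bool, ...) falls back to float64
    return demo_float_type.get(np.dtype(dtype).char, np.float64)

print(np.dtype(demo_supported_float_type(np.float16)))  # float32 (promoted)
print(np.dtype(demo_supported_float_type(np.uint8)))    # float64 (fallback)
print(np.dtype(demo_supported_float_type(np.float32)))  # float32 (preserved)
# for multiple input dtypes, the proposal combines the per-dtype results:
print(np.result_type(np.float32, np.float64))           # float64
```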
Force-pushed from 6b746f6 to daef834
_float_type -> _supported_float_type
test with additional dtypes (float16, float128)
suppress RuntimeWarnings in richardson_lucy
Force-pushed from 5935e71 to f1cb2d6
Thanks @grlee77! I've only made a minor and super optional code style comment.
@FirefoxMetzger identified a test that is possibly not comprehensive enough here, and I also wonder whether you've overlooked that?
img = 155 * np.ones((100, 100), dtype=dtype)
kernel = ellipsoid_kernel((25, 53), 50).astype(dtype, copy=False)
background = rolling_ball(img, kernel=kernel)
assert background.dtype == img.dtype
I think this concern is orthogonal to the purpose of this PR, so let's just defer this decision so as not to hold the PR up unnecessarily.
(But note that I'd be happy to merge this as-is in the interest of keeping momentum going with this series of PRs.)
@grlee77 do you want to pick this one up again?
add dtype-dependent rtol to test_inpaint_biharmonic_2d
- use a test image with more structure
- fix the random seed
- tune denoising parameters a bit
With these updates the difference between 3D vs 2D is more pronounced.
test float16 dtype
Sure, I have addressed the initial comments and rebased. I updated one of the non-local means test cases, which happened to give almost the same PSNR for the two cases being compared, so that it now has a more suitable test object than a simple square.
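For illustration, a hedged sketch of such a PSNR check; the real test uses skimage's denoising functions and test images, and `psnr` here is a local stand-in for `skimage.metrics.peak_signal_noise_ratio`:

```python
import numpy as np

# Local stand-in for skimage.metrics.peak_signal_noise_ratio.
def psnr(ref, test, data_range=1.0):
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(0)   # fixed seed, as in the updated test
ref = np.zeros((64, 64))
ref[16:48, 16:48] = 1.0          # a simple square: the *old* style of test object
noisy = ref + 0.1 * rng.standard_normal(ref.shape)

# With noise sigma=0.1 on unit range, PSNR lands near 20 dB; the real test
# then compares PSNR after denoising for the two cases under comparison.
print(f"{psnr(ref, noisy):.1f} dB")
```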
Thank you @grlee77, that's amazing! I just left really minor comments, otherwise it is ready to merge 😉
Co-authored-by: Riadh Fezzani <rfezzani@gmail.com>
… into float32_restoration
Thank you @grlee77 🎉
Thanks @rfezzani, I have implemented your suggestions and CI is green.
Nevermind, I see you merged just before I added the comment! Thanks for the detailed review.
This pull request has been mentioned on Image.sc Forum. There might be relevant details there:
Description
This PR adds float32 support to `wiener`, `unsupervised_wiener`, and `rolling_ball`. Some tests for the expected dtype were added to other functions that already had `float32` support. Here I introduced a `_float_dtype` helper in `_shared.utils` to try to keep casting to floating point types consistent. Probably #5204 could use this as well.
Checklist
- `./doc/examples` (new features only)
- `./benchmarks`, if your changes aren't covered by an existing benchmark
For reviewers
- … later.
- `__init__.py`.
- `doc/release/release_dev.rst`.